Scientific Computing in Rust Monthly #14
scientificcomputing.rs·7h
FlashAttention 4: Faster, Memory-Efficient Attention for LLMs
digitalocean.com·7h
Qdrant - Vector Database
qdrant.tech·20h
Why AI Needs GPUs and TPUs: The Hardware Behind LLMs
blog.bytebytego.com·2d
Co-optimization Approaches For Reliable and Efficient AI Acceleration (Peking University et al.)
semiengineering.com·1h